There is a darker side to artificial intelligence, including the use of that technology to create explicit images of children. It’s disturbing to think about.

But what does the law say? Are sexual abuse images of children created by AI illegal in New York? An Albany law professor says there isn’t a straight answer to this question.

Dan Sexton is the technology officer for the Internet Watch Foundation, one of three organizations in the world licensed to search the internet for child sexual abuse content and remove it. Sexton said the proliferation of synthetic child sexual abuse images on the internet is getting worse, faster than expected.

“Bad actors might abuse those technologies. AI just caught us off guard here,” Sexton said.

He sounded the alarm in a recent report that documented thousands of AI-created images found in just one month on the dark web, a volume he said law enforcement could struggle to keep up with. Recently, 50 attorneys general sent a letter to Congress to try to get ahead of the problem.

“They asked Congress to create a committee, a special blue-ribbon panel, to study this exact problem, the AI generation of child sexual abuse materials, and to recommend solutions to address it,” said Antony Haynes, a professor of law at Albany Law School.

Haynes is also the school's director of cybersecurity and privacy, with a current focus on AI. He said these images raise legal questions as well.

“So, I worry about my children and my female relatives,” Haynes said. “I think the lack of clarity in the law exposes them to a risk of people trying to navigate this gray area and take advantage of it.”

The gray area stems from a 2002 US Supreme Court decision, Ashcroft v. Free Speech Coalition, in which a law protecting children from pornography was challenged. That law made both images of real victims and computer-generated depictions illegal.

“The black letter law says if you depict a minor engaged in a sexual act, that’s a crime. What’s confusing is what happens when there’s not an actual child involved. Is that going to be illegal, too?” Haynes said. “The Supreme Court said if an actual child is not involved in the creation of this imagery, then that is protected speech. You can do that.”

It’s protected speech under the First Amendment to the Constitution. But there is also now a federal law stating that any hyper-realistic sexual image of a child is illegal. Haynes said that law hasn’t yet been tested against the 2002 Supreme Court ruling.

Meanwhile, New York is trying to close the legal gap. Haynes said the state has proposed legislation. In addition, AI software companies are trying to do their part.

“There’s some steps happening right now, and I think they’re a good start,” Haynes said. “But they’re not enough.”

The bigger picture, Haynes said, is that the world needs a safety regime for software systems, particularly artificial intelligence.

“Thinking about the need for construction standards in software. So, when you build a house, you have to make it satisfy your code. You have inspectors for water, electricity. There is no housing code for software,” he said. “Our regulations tend to emphasize market-driven solutions, which often allows for greater innovation and corporate profits. The risk, though, in our rush to allow for innovation: There’s these externalities, these side effects.”